This paper presents trajectory planning for three-dimensional autonomous multi-UAV volume coverage and visual inspection based on the Heat Equation Driven Area Coverage (HEDAC) algorithm. The method designs a potential field that attains the target coverage density, and trajectories are generated from the potential gradient, which directs UAVs toward regions of higher potential. Collisions are prevented by implementing a distance field and correcting an agent's direction vector when a distance threshold is reached. The method is successfully tested for volume coverage and visual inspection of complex structures such as wind turbines and a bridge. For visual inspection, the algorithm is supplemented with camera direction control: a field containing the nearest distance from any point in the domain to the structure is designed, and this field's gradient provides the camera orientation throughout the trajectory. In the bridge inspection test case, comparison with a state-of-the-art method shows that the HEDAC algorithm allows more surface area to be inspected under the same conditions. The limitations of the HEDAC method are analyzed, focusing on computational efficiency and on the adequacy of spatial coverage as an approximation of surface coverage. The proposed methodology offers flexibility in various setup parameters and is applicable to real-world inspection tasks.
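The gradient-following idea can be illustrated with a minimal 2D sketch (the paper works in 3D with a heat-equation-derived potential field; the toy field, grid, and step size below are illustrative stand-ins, and `gradient_step` is a hypothetical helper, not the authors' code):

```python
import numpy as np

def gradient_step(potential, pos, step=1.0):
    """Move an agent one step in the direction of increasing potential.

    `potential` is a 2D array standing in for HEDAC's 3D field; the
    agent's heading is the normalized finite-difference gradient at its
    current cell, so it is steered toward under-covered regions.
    """
    gy, gx = np.gradient(potential)
    i, j = int(round(pos[0])), int(round(pos[1]))
    d = np.array([gy[i, j], gx[i, j]])
    n = np.linalg.norm(d)
    if n == 0:                       # flat field: stay put
        return np.array(pos, dtype=float)
    return np.array(pos, dtype=float) + step * d / n

# A toy potential that grows with the row index, so the gradient
# pulls an agent toward higher rows.
field = np.arange(25, dtype=float).reshape(5, 5)
new_pos = gradient_step(field, (2.0, 2.0))
```

Collision avoidance would then amount to clipping or rotating this direction vector whenever the distance field drops below the threshold.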
In recent years, reinforcement learning (RL) has become increasingly successful in its application to science and the process of scientific discovery in general. However, while RL algorithms learn to solve increasingly complex problems, interpreting the solutions they provide becomes ever more challenging. In this work, we gain insights into an RL agent's learned behavior through a post-hoc analysis based on sequence mining and clustering. Specifically, frequent and compact subroutines, used by the agent to solve a given task, are distilled as gadgets and then grouped by various metrics. This process of gadget discovery proceeds in three stages: first, an RL agent generates data; then, a mining algorithm extracts gadgets; and finally, the obtained gadgets are grouped by a density-based clustering algorithm. We demonstrate our method by applying it to two quantum-inspired RL environments. First, we consider simulated quantum optics experiments for the design of high-dimensional multipartite entangled states, where the algorithm finds gadgets that correspond to modern interferometer setups. Second, we consider a circuit-based quantum computing environment, where the algorithm discovers various gadgets for quantum information processing, such as quantum teleportation. This approach for analyzing the policy of a learned agent is agent- and environment-agnostic and can yield interesting insights into any agent's policy.
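As a rough illustration of the mining stage, frequent contiguous subsequences of actions can be counted and filtered by a support threshold (the function name, episode data, and thresholds here are invented for illustration; the paper's mining and clustering algorithms are more involved):

```python
from collections import Counter

def mine_gadgets(sequences, min_len=2, max_len=3, min_support=2):
    """Toy stand-in for gadget mining: extract contiguous action
    subsequences that recur across an agent's solution episodes at
    least `min_support` times."""
    counts = Counter()
    for seq in sequences:
        for n in range(min_len, max_len + 1):
            for i in range(len(seq) - n + 1):
                counts[tuple(seq[i:i + n])] += 1
    return {g: c for g, c in counts.items() if c >= min_support}

# Three episodes that share the subroutine ("H", "CNOT").
episodes = [["H", "CNOT", "M"], ["X", "H", "CNOT"], ["H", "CNOT"]]
gadgets = mine_gadgets(episodes)
```

The surviving subsequences would then be vectorized and passed to a density-based clustering algorithm such as DBSCAN for grouping.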
Generating realistic 3D worlds occupied by moving humans has many applications in games, architecture, and synthetic data creation. But generating such scenes is expensive and labor intensive. Recent work generates human poses and motions given a 3D scene. Here, we take the opposite approach and generate 3D indoor scenes given 3D human motion. Such motions can come from archival motion capture or from IMU sensors worn on the body, effectively turning human movement into a "scanner" of the 3D world. Intuitively, human movement indicates the free space in a room, and human contact indicates surfaces or objects that support activities such as sitting, lying, or touching. We propose MIME (Mining Interaction and Movement to infer 3D Environments), a generative model of indoor scenes that produces furniture layouts consistent with the human movement. MIME uses an auto-regressive transformer architecture that takes the already generated objects in the scene as well as the human motion as input, and outputs the next plausible object. To train MIME, we build a dataset by populating the 3D FRONT scene dataset with 3D humans. Our experiments show that MIME produces more diverse and plausible 3D scenes than a recent generative scene method that does not know about human movement. Code and data will be available for research at https://mime.is.tue.mpg.de.
Multiple instance learning is a powerful approach for whole slide image-based diagnosis in the absence of pixel- or patch-level annotations. In spite of the huge size of whole slide images, the number of individual slides is often rather small, leading to a small number of labeled samples. To improve training, we propose and investigate different data augmentation strategies for multiple instance learning based on the idea of linear interpolation of feature vectors (known as MixUp). Based on state-of-the-art multiple instance learning architectures and two thyroid cancer data sets, an exhaustive study is conducted considering a range of common data augmentation strategies. Whereas a strategy based on the original MixUp approach showed decreases in accuracy, the use of a novel intra-slide interpolation method led to consistent increases in accuracy.
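The underlying MixUp interpolation of feature vectors can be sketched as follows (the function name, `alpha` value, and scalar labels are illustrative; in the intra-slide variant, the two feature vectors would be instance features drawn from the same slide):

```python
import numpy as np

def mixup_features(x1, x2, y1, y2, alpha=0.2, rng=None):
    """Core MixUp step: draw a mixing coefficient from a Beta
    distribution and linearly interpolate two feature vectors and
    their labels with it."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

# Mix a "benign" (label 0) and a "malignant" (label 1) feature vector.
x, y, lam = mixup_features(np.zeros(4), np.ones(4), 0.0, 1.0)
```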
The goal of algorithmic recourse is to reverse unfavorable decisions (e.g., from loan denial to approval) under automated decision making by suggesting actionable feature changes (e.g., reduce the number of credit cards). To generate low-cost recourse, the majority of methods work under the assumption that the features are independently manipulable (IMF). To address the feature dependency issue, the recourse problem is usually studied through the causal recourse paradigm. However, it is well known that strong assumptions, as encoded in causal models and structural equations, hinder the applicability of these methods in complex domains where causal dependency structures are ambiguous. In this work, we develop \texttt{DEAR} (DisEntangling Algorithmic Recourse), a novel and practical recourse framework that bridges the gap between the IMF and the strong causal assumptions. \texttt{DEAR} generates recourses by disentangling the latent representation of co-varying features from a subset of promising recourse features to capture the main practical recourse desiderata. Our experiments on real-world data corroborate our theoretically motivated recourse model and highlight our framework's ability to provide reliable, low-cost recourse in the presence of feature dependencies.
This paper describes our contribution to the Shared Task of the 9th Workshop on Argument Mining (2022). Our approach uses large language models for the task of argument quality prediction. We perform prompt engineering with GPT-3 and study the training paradigms multi-task learning, contrastive learning, and intermediate-task training. We find that a mixed prediction setup outperforms single models. Prompting GPT-3 works best for predicting argument validity, while argument novelty is best estimated by a model trained with all three training paradigms.
In this work, we present a neural approach for reconstructing rooted tree graphs that describe hierarchical interactions, using a novel representation we call the Lowest Common Ancestor Generations (LCAG) matrix. This compact formulation is equivalent to the adjacency matrix, but it allows the tree structure to be learned from the leaves alone, without the prior assumptions required if the adjacency matrix were used directly. Employing the LCAG thus enables the first end-to-end trainable solution that learns the hierarchical structure of varying tree sizes directly using only the terminal tree leaves. In high-energy particle physics, particle decays form hierarchical tree structures of which only the final products can be observed experimentally, and the large combinatorial space of possible trees makes an analytic solution intractable. We demonstrate the use of the LCAG as a target for the task of predicting simulated particle-physics decay structures using both a Transformer encoder and a Neural Relational Inference encoder graph neural network. With this approach, we are able to correctly predict the LCAG purely from leaf features for a maximum tree depth of 8 in 92.5% of cases for trees with up to 6 leaves (inclusive) and in 59.7% of cases for trees with up to 10 leaves in our simulated dataset.
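A small sketch of how an LCAG-style matrix could be derived from a known tree (the generation convention used here, counting from the deeper of the two leaves up to their lowest common ancestor, is an illustrative assumption, as are the node names and the helper function):

```python
def lcag_matrix(parent, leaves):
    """Build a toy Lowest-Common-Ancestor-Generations matrix for a
    rooted tree given as a child -> parent mapping. Entry (i, j) is
    the number of generations separating leaves i and j from their
    lowest common ancestor; the diagonal is 0."""
    def ancestors(n):
        chain = [n]
        while n in parent:
            n = parent[n]
            chain.append(n)
        return chain

    m = [[0] * len(leaves) for _ in leaves]
    for i, a in enumerate(leaves):
        for j, b in enumerate(leaves):
            if i == j:
                continue
            ca, cb = ancestors(a), ancestors(b)
            lca = next(n for n in ca if n in set(cb))
            # Symmetric convention: generations from the deeper leaf.
            m[i][j] = max(ca.index(lca), cb.index(lca))
    return m

# Small decay tree: root r -> internal node u and leaf c; u -> leaves a, b.
parent = {"u": "r", "c": "r", "a": "u", "b": "u"}
M = lcag_matrix(parent, ["a", "b", "c"])
```

The learning task in the paper is the inverse of this sketch: predicting such a matrix from leaf features alone, without access to `parent`.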
We present a biologically inspired approach for path planning with dynamic obstacle avoidance. Path planning is performed in a condensed configuration space of the robot, generated by a self-organizing neural network (SONN). The robot itself, as well as static and dynamic obstacles, are mapped from the Cartesian task space into the configuration space via precomputed kinematics. The condensed space represents a cognitive map of the environment, inspired by place cells and the concept of cognitive maps in mammalian brains. Training data generation and the evaluation were carried out on a real industrial robot, accompanied by simulations. To evaluate collision-free online planning in changing environments, a demonstrator was implemented, followed by a comparative study with sample-based planners. We can thereby show that the robot is able to operate in dynamically changing environments and to replan its motion trajectories within an impressive 0.02 seconds, which proves the real-time capability of our concept.
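A single Kohonen-style update illustrates the kind of self-organizing-map training step a SONN could use to condense a configuration space (the map size, learning rate, and neighborhood width below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def som_update(weights, sample, lr=0.5, sigma=1.0):
    """One Kohonen self-organizing-map update: the best-matching unit
    and its grid neighbors are pulled toward `sample`, so the map
    gradually condenses the sampled configuration space."""
    dists = np.linalg.norm(weights - sample, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    ii, jj = np.indices(dists.shape)
    grid_d2 = (ii - bmu[0]) ** 2 + (jj - bmu[1]) ** 2
    h = np.exp(-grid_d2 / (2 * sigma ** 2))      # neighborhood kernel
    return weights + lr * h[..., None] * (sample - weights)

rng = np.random.default_rng(0)
w = rng.random((4, 4, 3))            # 4x4 map over a 3-D config space
sample = np.array([0.5, 0.5, 0.5])   # one sampled robot configuration
w2 = som_update(w, sample)
```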
While methods that regress 3D humans from images have progressed rapidly, the estimated body shapes often do not capture the true human shape. This is problematic since, for many applications, accurate body shape is as important as pose. The key reason that body shape accuracy lags behind pose accuracy is the lack of data. While humans can label 2D joints, and these constrain 3D pose, it is not easy to "label" 3D body shape. Since paired data with images and 3D body shape are rare, we exploit two sources of information: (1) we collect internet images of diverse "fashion" models together with a small set of anthropometric measurements; (2) we collect linguistic shape attributes for a wide range of 3D body meshes and the model images. Taken together, these datasets provide sufficient constraints to infer dense 3D shape. We exploit several novel methods that leverage anthropometric measurements and linguistic shape attributes to train a neural network, called SHAPY, that regresses 3D human pose and shape from an RGB image. We evaluate SHAPY on public benchmarks, but note that they either lack significant body shape variation, ground-truth shape, or clothing variation. Thus, we collect a new dataset for evaluating 3D human shape estimation, called HBW, containing photos of "Human Bodies in the Wild" for which we have ground-truth 3D body scans. On this new benchmark, SHAPY significantly outperforms state-of-the-art methods on the task of 3D body shape estimation. This is the first demonstration that 3D body shape regression from images can be trained from easy-to-obtain anthropometric measurements and linguistic shape attributes. Our model and data are available at: shapy.is.tue.mpg.de
Understanding model predictions is critical in healthcare, both to facilitate rapid verification of model correctness and to guard against the use of models that exploit confounding variables. We introduce the challenging new task of explainable multiple abnormality classification in volumetric medical images, in which a model must indicate the regions used to predict each abnormality. To address this task, we propose a multiple instance learning convolutional neural network, AxialNet, that allows identification of the top slices for each abnormality. Next, we incorporate HiResCAM into the attention mechanism to identify sub-slice regions. We demonstrate that, for AxialNet, HiResCAM explanations are guaranteed to reflect the locations the model used, unlike Grad-CAM, which sometimes highlights irrelevant locations. Using a model that produces faithful explanations, we then aim to improve the model's learning through a novel mask loss that leverages HiResCAM and 3D allowed regions to encourage the model to predict abnormalities based only on the organs in which those abnormalities appear. The 3D allowed regions are obtained automatically through a new approach, PARTITION, that combines location information extracted from radiology reports with organ segmentation maps obtained through morphological image processing. Overall, we propose the first model for explainable multi-abnormality prediction in volumetric medical images, and then use the mask loss to achieve a 33% improvement in organ localization of multiple abnormalities on the RAD-ChestCT dataset of 36,316 scans, representing the state of the art. This work advances the clinical applicability of multiple-abnormality modeling in chest CT volumes.